Search for: All records

Creators/Authors contains: "Barnes, Elizabeth A."


  1. Abstract

    Seasonal‐to‐decadal climate prediction is crucial for decision‐making in a number of industries, but forecasts on these timescales have limited skill. Here, we develop a data‐driven method for selecting optimal analogs for seasonal‐to‐decadal analog forecasting. Using an interpretable neural network, we learn a spatially‐weighted mask that quantifies how important each grid point is for determining whether two climate states will evolve similarly. We show that analogs selected using this weighted mask provide more skillful forecasts than analogs selected using traditional spatially‐uniform methods. This method is tested on two prediction problems using the Max Planck Institute for Meteorology Grand Ensemble: multi‐year prediction of North Atlantic sea surface temperatures and seasonal prediction of the El Niño-Southern Oscillation. This work demonstrates a methodical approach to selecting analogs that may be useful for improving seasonal‐to‐decadal forecasts and understanding their sources of skill.

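    For concreteness, the core of this approach, selecting analogs with a learned spatial weighting rather than a uniform one, can be sketched in a few lines. The numpy sketch below is a minimal illustration only: the mask is a random placeholder standing in for the network-learned weights, the state library is synthetic, and all names and shapes (select_analogs, library_now, library_future) are hypothetical rather than the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_library, n_grid = 1000, 500                    # library size and number of grid points
library_now = rng.standard_normal((n_library, n_grid))     # candidate initial states
library_future = rng.standard_normal((n_library, n_grid))  # their states one lead time later
target = rng.standard_normal(n_grid)             # initial state we want to forecast from

mask = rng.random(n_grid)                        # placeholder for the learned spatial weights
mask /= mask.sum()

def select_analogs(state, candidates, weights, k=15):
    """Indices of the k candidates closest to `state` under a weighted
    Euclidean distance; uniform weights recover the traditional method."""
    d2 = ((candidates - state) ** 2 * weights).sum(axis=1)
    return np.argsort(d2)[:k]

idx = select_analogs(target, library_now, mask)
analog_forecast = library_future[idx].mean(axis=0)  # average evolution of the best analogs
print(analog_forecast.shape)
```
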
  2. Abstract

    A simple method for adding uncertainty to neural network regression tasks in earth science via estimation of a general probability distribution is described. Specifically, we highlight the sinh-arcsinh-normal distribution as particularly well suited for neural network uncertainty estimation. The methodology supports estimation of heteroscedastic, asymmetric uncertainties by a simple modification of the network output and loss function. Method performance is demonstrated by predicting tropical cyclone intensity forecast uncertainty and by comparison with two other common methods for neural network uncertainty quantification (i.e., Bayesian neural networks and Monte Carlo dropout). The simple approach described here is intuitive and applicable when no prior exists and one simply wishes to parameterize the output and its uncertainty according to some previously defined family of distributions. The authors believe it will become a powerful, go-to method moving forward.
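
    A minimal sketch of this setup, assuming TensorFlow plus TensorFlow Probability (which provides a SinhArcsinh distribution): the network's four outputs parameterize location, scale, skewness, and tailweight, and the loss is the negative log-likelihood. The architecture, synthetic data, and the shash_nll helper are illustrative placeholders, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

def shash_nll(y_true, y_pred):
    """Negative log-likelihood of y_true under a sinh-arcsinh-normal whose
    four parameters are read from the network output."""
    loc = y_pred[:, 0]
    scale = tf.math.softplus(y_pred[:, 1])   # softplus keeps scale positive
    skew = y_pred[:, 2]
    tail = tf.math.softplus(y_pred[:, 3])    # softplus keeps tailweight positive
    dist = tfd.SinhArcsinh(loc=loc, scale=scale, skewness=skew, tailweight=tail)
    return -tf.reduce_mean(dist.log_prob(tf.squeeze(y_true, axis=-1)))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4),                # loc, raw scale, skewness, raw tailweight
])
model.compile(optimizer="adam", loss=shash_nll)

x = np.random.randn(256, 10).astype("float32")   # synthetic inputs
y = np.random.randn(256, 1).astype("float32")    # synthetic targets
model.fit(x, y, epochs=2, verbose=0)
```
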
  3. Abstract

    Convolutional neural networks (CNNs) have recently attracted great attention in geoscience due to their ability to capture non-linear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature, however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain the CNN decision-making strategy. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and gain insight into their relative strengths and weaknesses to help guide best practices. The considered XAI methods are first applied to an idealized attribution benchmark, where the ground truth of the network's explanation is known a priori, to help objectively assess their performance. Second, we apply XAI to a climate-related prediction setting, namely to explain a CNN that is trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, ignorance to zero input) that have previously been overlooked in our field and, if not considered cautiously, may lead to a distorted picture of the CNN decision-making strategy. We envision that our analysis will motivate further investigation into XAI fidelity and will help toward a cautious implementation of XAI in geoscience, which can lead to further exploitation of CNNs and deep learning for prediction problems.
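
    As a toy illustration of how two such attribution methods can already disagree, the PyTorch sketch below computes plain-gradient and input-times-gradient heatmaps for a small CNN; the network and data are placeholders, not the atmospheric-river model from the paper.

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, 1),
)

x = torch.randn(1, 1, 16, 16, requires_grad=True)  # one toy input "snapshot"
cnn(x).sum().backward()

saliency = x.grad                        # plain-gradient attribution
input_times_grad = x.grad * x.detach()   # input*gradient attribution
# The two heatmaps can disagree in sign and magnitude; sign ambiguity and the
# treatment of zero inputs are among the pitfalls the study highlights.
print(saliency.abs().max().item(), input_times_grad.abs().max().item())
```
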
  4. Abstract

    The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) focuses on creating trustworthy AI for a variety of environmental and Earth science phenomena. AI2ES includes leading experts from AI, atmospheric and ocean science, risk communication, and education, who work synergistically to develop and test trustworthy AI methods that transform our understanding and prediction of the environment. Trust is a social phenomenon, and our integration of risk communication research across AI2ES activities provides an empirical foundation for developing user‐informed, trustworthy AI. AI2ES also features activities to broaden participation and for workforce development that are fully integrated with AI2ES research on trustworthy AI, environmental science, and risk communication.

  5. Abstract

    Despite the increasingly successful application of neural networks to many problems in the geosciences, their complex and nonlinear structure makes the interpretation of their predictions difficult, which limits model trust and does not allow scientists to gain physical insights about the problem at hand. Many different methods have been introduced in the emerging field of eXplainable Artificial Intelligence (XAI), which aims at attributing the network's prediction to specific features in the input domain. XAI methods are usually assessed using benchmark datasets (such as MNIST or ImageNet for image classification). However, an objective, theoretically derived ground truth for the attribution is lacking for most of these datasets, making the assessment of XAI in many cases subjective. Also, benchmark datasets specifically designed for problems in geosciences are rare. Here, we provide a framework, based on the use of additively separable functions, to generate attribution benchmark datasets for regression problems for which the ground truth of the attribution is known a priori. We generate a large benchmark dataset and train a fully connected network to learn the underlying function that was used for simulation. We then compare estimated heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly. We believe that attribution benchmarks such as the ones introduced herein are of great importance for further application of neural networks in the geosciences, and for more objective assessment and accurate implementation of XAI methods, which will increase model trust and assist in discovering new science.
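
    The benchmark construction itself is straightforward to sketch: because the synthetic target is a sum of per-feature functions, each feature's contribution is known exactly. In the numpy sketch below, the cubic component functions and all names are arbitrary illustrative choices, not the dataset used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features = 10000, 8
X = rng.standard_normal((n_samples, n_features))

coefs = rng.standard_normal((n_features, 3))  # random cubic f_i for each feature

def components(X):
    """Per-feature contributions f_i(x_i), shape (n_samples, n_features)."""
    return coefs[:, 0] * X + coefs[:, 1] * X**2 + coefs[:, 2] * X**3

ground_truth_attribution = components(X)   # known a priori, feature by feature
y = ground_truth_attribution.sum(axis=1)   # additively separable regression target

# A network trained on (X, y) can then have its XAI heatmaps compared directly
# against `ground_truth_attribution`, sample by sample.
print(y.shape, ground_truth_attribution.shape)
```
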
  6. Abstract

    Predictable internal climate variability on decadal timescales (2–10 years) is associated with large‐scale oceanic processes; however, these predictable signals may be masked by the noisy climate system. One approach to overcoming this problem is investigating state‐dependent predictability—how differences in prediction skill depend on the initial state of the system. We present a machine learning approach to identify state‐dependent predictability on decadal timescales in the Community Earth System Model version 2 pre‐industrial control simulation by incorporating uncertainty estimates into a regression neural network. We leverage the network's prediction of uncertainty to examine state‐dependent predictability in sea surface temperatures by focusing on predictions with the lowest uncertainty outputs. In particular, we study two regions of the global ocean—the North Atlantic and North Pacific—and find that skillful initial states identified by the neural network correspond to particular phases of Atlantic multi‐decadal variability and the interdecadal Pacific oscillation.

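    The selection step this abstract describes, keeping only the predictions with the lowest predicted uncertainty and checking whether they are more accurate, can be sketched as below. The arrays are synthetic stand-ins for a network's predicted means and uncertainties (errors are constructed to scale with the predicted uncertainty, mimicking a well-calibrated network); nothing here reproduces the paper's model or data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
truth = rng.standard_normal(n)
sigma = 0.2 + rng.random(n)                  # stand-in predicted uncertainties
mu = truth + sigma * rng.standard_normal(n)  # stand-in predicted means; error scales with sigma

confident = sigma <= np.quantile(sigma, 0.2)  # lowest-uncertainty 20% of initial states
print("all-state MAE:      ", np.abs(mu - truth).mean())
print("confident-state MAE:", np.abs(mu[confident] - truth[confident]).mean())
# In the paper, the initial states behind the confident subset are then mapped
# onto phases of Atlantic multi-decadal variability and the interdecadal
# Pacific oscillation.
```
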
  7. Abstract

    Soil moisture (SM) influences near‐surface air temperature by partitioning downwelling radiation into latent and sensible heat fluxes, through which dry soils generally lead to higher temperatures. The strength of this coupled soil moisture‐temperature (SM‐T) relationship is not spatially uniform, and numerous methods have been developed to assess SM‐T coupling strength across the globe. These methods tend to involve either idealized climate‐model experiments or linear statistical methods which cannot fully capture nonlinear SM‐T coupling. In this study, we propose a nonlinear machine‐learning (ML)‐based approach for analyzing SM‐T coupling and apply this method to various mid‐latitude regions using historical reanalysis datasets. We first train convolutional neural networks (CNNs) to predict daily maximum near‐surface air temperature (TMAX) given daily SM and geopotential height fields. We then use partial dependence analysis to isolate the average sensitivity of each CNN's TMAX prediction to the SM input under daily atmospheric conditions. The resulting SM‐T relationships broadly agree with previous assessments of SM‐T coupling strength. Over many regions, we find nonlinear relationships between the CNN's TMAX prediction and the SM input map. These nonlinearities suggest that the coupled interactions governing SM‐T relationships vary under different SM conditions, but these variations are regionally dependent. We also apply this method to test the influence of SM memory on SM‐T coupling and find that our results are consistent with previous studies. Although our study focuses specifically on local SM‐T coupling, our ML‐based method can be extended to investigate other coupled interactions within the climate system using observed or model‐derived datasets.

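    A minimal sketch of the partial-dependence step, assuming a stand-in callable in place of the trained CNN: the soil-moisture input is swept over a range of fixed values while the geopotential-height input keeps its daily values, and the predicted TMAX is averaged at each value. The model function, grids, and value range are all illustrative.

```python
import numpy as np

def model(sm, z500):
    """Stand-in for the trained CNN: a made-up nonlinear SM-T response."""
    return -1.5 * np.tanh(sm.mean(axis=(1, 2))) + 0.1 * z500.mean(axis=(1, 2))

rng = np.random.default_rng(3)
n_days, ny, nx = 200, 8, 8
sm = rng.random((n_days, ny, nx))             # daily soil-moisture maps
z500 = rng.standard_normal((n_days, ny, nx))  # daily geopotential-height maps

sm_values = np.linspace(0.0, 1.0, 11)
partial_dependence = [
    model(np.full_like(sm, v), z500).mean()   # average predicted TMAX over all days
    for v in sm_values
]
# A curve that is not a straight line indicates that the sensitivity of TMAX
# to soil moisture itself depends on the soil-moisture state.
print(np.round(partial_dependence, 3))
```
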
  8. Abstract

    Subseasonal timescales (∼2 weeks–2 months) are known for their lack of predictability; however, specific Earth system states known to have a strong influence on these timescales can be harnessed to improve prediction skill (known as “forecasts of opportunity”). As the climate continues warming, it is hypothesized that these states may change and that, consequently, their importance for subseasonal prediction may also be impacted. Here, we examine changes to midlatitude subseasonal prediction skill provided by the tropics under anthropogenic warming, using artificial neural networks to quantify skill. The network is tasked to predict the sign of the 500 hPa geopotential height across the Northern Hemisphere at a 3‐week lead, for historical and future time periods in the Community Earth System Model Version 2 Large Ensemble, using tropical precipitation. We show that prediction skill changes substantially in key midlatitude regions, and these changes appear linked to changes in seasonal variability, with the largest differences in accuracy occurring during forecasts of opportunity.

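    The forecasts-of-opportunity metric implied here can be sketched simply: recompute accuracy on only the most confident predictions. In the sketch below the predicted probabilities and labels are synthetic stand-ins, so the numbers mean nothing beyond illustrating the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
p = rng.random(n)                          # stand-in predicted P(positive Z500 anomaly)
labels = (rng.random(n) < p).astype(int)   # synthetic truth consistent with p

pred = (p > 0.5).astype(int)
confidence = np.abs(p - 0.5)
opportunity = confidence >= np.quantile(confidence, 0.8)  # top 20% most confident

print("all-forecast accuracy:        ", (pred == labels).mean())
print("forecasts-of-opportunity acc.:", (pred[opportunity] == labels[opportunity]).mean())
# Comparing such scores between historical and future periods is how changes
# in subseasonal skill under warming can be quantified.
```
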
  9. Abstract

    Methods of explainable artificial intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of neural networks (NNs), highlighting which features in the input contribute the most to a NN prediction. Here, we discuss our “lesson learned” that the task of attributing a prediction to the input does not have a single solution. Instead, the attribution results depend greatly on the considered baseline that the XAI method utilizes—a fact that has been overlooked in the geoscientific literature. The baseline is a reference point to which the prediction is compared so that the prediction can be understood. This baseline can be chosen by the user or is set by construction in the method’s algorithm—often without the user being aware of that choice. We highlight that different baselines can lead to different insights for different science questions and, thus, should be chosen accordingly. To illustrate the impact of the baseline, we use a large ensemble of historical and future climate simulations forced with the shared socioeconomic pathway 3-7.0 (SSP3-7.0) scenario and train a fully connected NN to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We then use various XAI methods and different baselines to attribute the network predictions to the input. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions. We conclude by discussing important implications and considerations about the use of baselines in XAI research.

    Significance Statement

    In recent years, methods of explainable artificial intelligence (XAI) have found widespread use in geoscientific applications, because they can be used to attribute the predictions of neural networks (NNs) to the input and interpret them physically. Here, we highlight that the attributions—and the physical interpretation—depend greatly on the choice of baseline—a fact that has been overlooked in the geoscientific literature. We illustrate this dependence for a specific climate task, in which a NN is trained to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions.

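    The baseline dependence is easiest to see in a model where attributions are exact. The numpy sketch below uses a linear model, for which the attribution relative to a baseline b is w_i(x_i - b_i); swapping a zero baseline for a climatological-mean baseline changes the attribution, and thus the feature ranking, even though the model and input are unchanged. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_features = 6
w = rng.standard_normal(n_features)        # stand-in "network": f(x) = w @ x
x = rng.standard_normal(n_features)        # input map to explain

baseline_zero = np.zeros(n_features)             # a common implicit default
baseline_clim = rng.standard_normal(n_features)  # e.g., a climatological-mean state

attr_zero = w * (x - baseline_zero)
attr_clim = w * (x - baseline_clim)
# Each attribution sums exactly to f(x) - f(baseline), yet the two rank the
# features differently: "relative to zero" and "relative to climatology" are
# different science questions.
print(np.argsort(-np.abs(attr_zero)))
print(np.argsort(-np.abs(attr_clim)))
```
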